MSED: a multi-modal sleep event detection model for clinical sleep analysis
Olesen, Alexander Neergaard, Jennum, Poul, Mignot, Emmanuel, Sorensen, Helge B. D.
Study objective: Clinical sleep analysis requires manual analysis of sleep patterns for correct diagnosis of sleep disorders. Several studies show significant variability in the scoring of discrete sleep events. We investigated whether an automatic method could be used to detect arousals (Ar), leg movements (LM), and sleep disordered breathing (SDB) events, and whether the joint detection of these events performed better than three separate models. Methods: We designed a single deep neural network architecture to jointly detect sleep events in a polysomnogram. We trained the model on 1653 recordings of individuals and tested the optimized model on 1000 separate recordings. The performance of the model was quantified by F1, precision, and recall scores, and by correlating index values to clinical values using Pearson's correlation coefficient. Results: F1 scores for the optimized model were 0.70, 0.63, and 0.62 for Ar, LM, and SDB, respectively. The performance was higher when detecting events jointly compared to the corresponding single-event models. Index values computed from detected events correlated well with manual annotations ($r^2$ = 0.73, $r^2$ = 0.77, $r^2$ = 0.78, respectively). Conclusion: Detecting arousals, leg movements, and sleep disordered breathing events jointly is possible, and the computed index values correlate well with human annotations.
- North America > United States > Illinois > DuPage County > Darien (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (7 more...)
- Research Report > Experimental Study (0.93)
- Research Report > New Finding (0.68)
- Health & Medicine > Therapeutic Area > Sleep (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.67)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- North America > United States > California > Napa County (0.04)
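The abstract above quantifies performance with event-level F1, precision, and recall, and correlates detected-event indices to manual annotations via Pearson's coefficient. A minimal sketch of those metrics follows; the function names and counts are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the evaluation metrics described in the abstract:
# event-level precision/recall/F1 from true/false positive and false
# negative counts, and Pearson r^2 for correlating index values.

def precision_recall_f1(tp, fp, fn):
    # Precision: fraction of detected events that match an annotation.
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall: fraction of annotated events that were detected.
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def pearson_r2(x, y):
    # Squared Pearson correlation between automatic and manual indices.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    r = cov / (vx * vy) ** 0.5
    return r * r
```

For example, 8 true positives with 2 false positives and 2 false negatives yields precision, recall, and F1 all equal to 0.8.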
An Agent-based Commodity Trading Simulation
Cheng, Shih-Fen (Singapore Management University) | Lim, Yee Pin (Singapore Management University)
In this paper, an event-centric commodity trading simulation powered by a multiagent framework is presented. The purpose of this simulation platform is to train novice traders. The simulation progresses by announcing news events that affect various aspects of the commodity supply chain. Upon receiving these events, market agents that play the roles of producers, consumers, and speculators adjust their views on the market and act accordingly. Their actions are based on their roles and also their private information, and collectively they shape the market dynamics. This simulation has been effectively deployed in several training sessions. We will present the underlying technologies that are employed and discuss the practical significance of such a platform.
- Asia > Singapore (0.05)
- North America > United States > Texas (0.04)
- Asia > Middle East > Iraq (0.04)
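The event-driven loop the abstract describes (news events arrive, role-based agents update private views, collective orders move the price) can be sketched as follows. The roles, weights, and price-update rule are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of an event-centric multiagent trading loop:
# news shocks update each agent's private view according to its role,
# and the net demand of all agents moves the market price.

class MarketAgent:
    def __init__(self, role):
        self.role = role       # "producer", "consumer", or "speculator"
        self.view = 100.0      # private estimate of the fair price

    def on_news(self, shock):
        # Each role weighs a supply-chain news shock differently
        # (weights are made up for illustration).
        weight = {"producer": 0.5, "consumer": 0.8, "speculator": 1.2}
        self.view += weight[self.role] * shock

    def order(self, market_price):
        # Buy below one's private valuation, sell above it.
        return "buy" if market_price < self.view else "sell"

def simulate(events, agents, price=100.0):
    history = []
    for shock in events:
        for a in agents:
            a.on_news(shock)
        buys = sum(a.order(price) == "buy" for a in agents)
        # Price moves with net demand; collective actions shape dynamics.
        price += 0.1 * (2 * buys - len(agents))
        history.append(price)
    return history
```

A single positive news shock raises every agent's valuation above the current price, so all agents buy and the price ticks upward.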
Congruence between model and human attention reveals unique signatures of critical visual events
Current computational models of bottom-up and top-down components of attention are predictive of eye movements across a range of stimuli and of simple, fixed visual tasks (such as visual search for a target among distractors). However, to date there exists no computational framework which can reliably mimic human gaze behavior in more complex environments and tasks, such as driving a vehicle through traffic. Here, we develop a hybrid computational/behavioral framework, combining simple models for bottom-up salience and top-down relevance, and looking for changes in the predictive power of these components at different critical event times during 4.7 hours (500,000 video frames) of observers playing car racing and flight combat video games. This approach is motivated by our observation that the predictive strengths of the salience and relevance models exhibit reliable temporal signatures during critical event windows in the task sequence--for example, when the game player directly engages an enemy plane in a flight combat game, the predictive strength of the salience model increases significantly, while that of the relevance model decreases significantly. Our new framework combines these temporal signatures to implement several event detectors. Critically, we find that an event detector based on fused behavioral and stimulus information (in the form of the model's predictive strength) is much stronger than detectors based on behavioral information alone (eye position) or image information alone (model prediction maps). This approach to event detection, based on eye tracking combined with computational models applied to the visual input, may have useful applications as a less-invasive alternative to other event detection approaches based on neural signatures derived from EEG or fMRI recordings.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
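The core fusion idea above (an event detector combining a behavioral signal with a stimulus-side signal, flagging windows where the fused signal spikes) can be sketched as below. The z-score normalization, simple additive fusion, and threshold are illustrative assumptions, not the paper's actual method.

```python
# Hedged sketch of a fused event detector: combine a behavioral signal
# (e.g. per-frame gaze speed) with a stimulus-side signal (e.g. a salience
# model's per-frame predictive strength) and flag frames where the fused,
# normalized signal exceeds a threshold. All details here are illustrative.

def zscore(xs):
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5 or 1.0
    return [(x - mean) / sd for x in xs]

def fused_detector(gaze_speed, model_strength, threshold=1.5):
    # Additive fusion of the two normalized signals; a frame is flagged
    # as a critical event when both signals spike together, as during an
    # engagement window in the abstract's flight combat example.
    fused = [g + s for g, s in zip(zscore(gaze_speed),
                                   zscore(model_strength))]
    return [i for i, f in enumerate(fused) if f > threshold]
```

A detector using only one of the two inputs would miss frames where the individual signals are moderate but their combination is distinctive, which is the intuition behind the fused detector's reported advantage.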